Continual Reinforcement Learning with Complex Synapses

Authors

  • Christos Kaplanis
  • Murray Shanahan
  • Claudia Clopath
Abstract

Unlike humans, who are capable of continual learning over their lifetimes, artificial neural networks have long been known to suffer from a phenomenon known as catastrophic forgetting, whereby new learning can lead to abrupt erasure of previously acquired knowledge. Whereas in a neural network the parameters are typically modelled as scalar values, an individual synapse in the brain comprises a complex network of interacting biochemical components that evolve at different timescales. In this paper, we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity (Benna & Fusi, 2016), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database.
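The synaptic model referenced in the abstract is the multi-timescale chain of Benna & Fusi (2016), in which each weight is backed by a cascade of hidden variables coupled by strengths that shrink geometrically down the chain, so deeper variables change ever more slowly and retain older values. Below is a minimal, illustrative Python sketch of one such chain applied to a single parameter; the chain length, coupling constants, and step size are assumptions chosen for illustration, and the per-variable capacity factors of the original model are omitted, so this is a simplified sketch rather than the authors' exact implementation.

import numpy as np

class BennaFusiSynapse:
    """Simplified chain of coupled variables (after Benna & Fusi, 2016).

    u[0] is the visible weight; deeper variables act as progressively
    slower memory traces because the couplings g[k] shrink geometrically.
    """

    def __init__(self, n_vars=5, g1=0.1, ratio=2.0, dt=1.0):
        self.u = np.zeros(n_vars)                     # u[0] is the weight seen by the agent
        self.g = g1 / ratio ** np.arange(n_vars - 1)  # couplings between neighbouring variables
        self.dt = dt

    def step(self, grad_update=0.0):
        """Push an external update into u[0], then let the chain diffuse one step."""
        u, g = self.u, self.g
        du = np.zeros_like(u)
        du[0] += grad_update
        flow = g * (u[:-1] - u[1:])                   # exchange between neighbouring variables
        du[:-1] -= flow * self.dt
        du[1:] += flow * self.dt
        self.u = u + du
        return self.u[0]

# Hypothetical usage in a tabular agent: grad_update would be the
# learning-rate-scaled TD error for this particular weight or Q-value.
syn = BennaFusiSynapse()
for t in range(1000):
    w = syn.step(grad_update=0.01)

Because new updates enter only at u[0] and diffuse inward slowly, the deeper variables keep a trace of values learned earlier and gradually pull the visible weight back towards them, which is the mechanism the paper exploits to reduce forgetting both across tasks and within a task.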


Similar articles

First Step Towards Continual Learning

Continual learning is the constant development of increasingly complex behaviors; the process of building more complicated skills on top of those already developed. A continual-learning agent should therefore learn incrementally and hierarchically. This paper describes CHILD, an agent capable of Continual, Hierarchical, Incremental Learning and Development. CHILD can quickly solve complicated no...


Dex: Incremental Learning for Complex Environments in Deep Reinforcement Learning

This paper introduces Dex, a reinforcement learning environment toolkit specialized for training and evaluation of continual learning methods as well as general reinforcement learning problems. We also present the novel continual learning method of incremental learning, where a challenging environment is solved using optimal weight initialization learned from first solving a similar easier envi...


Continual Learning for Mobile Robots

Autonomous mobile robots should be able to learn incrementally and adapt to changes in the operating environment during their entire lifetime. This is referred to as continual learning. In this thesis, I propose an approach to continual learning which is based on adaptive state-space quantisation and reinforcement learning. Representational tools for continual learning should be constructive, a...


Continual Learning through Evolvable Neural Turing Machines

Continual learning, i.e. the ability to sequentially learn tasks without catastrophic forgetting of previously learned ones, is an important open challenge in machine learning. In this paper we take a step in this direction by showing that the recently proposed Evolving Neural Turing Machine (ENTM) approach is able to perform one-shot learning in a reinforcement learning task without catastroph...


Global Reinforcement Learning in Neural Networks with Stochastic Synapses [IJCNN1372]

We have found a more general formulation of the REINFORCE learning principle which had been proposed by R. J. Williams for the case of artificial neural networks with stochastic cells (“Boltzmann machines”). This formulation has enabled us to apply the principle to global reinforcement learning in networks with deterministic neural cells but stochastic synapses, and to suggest two groups of new...



Journal:
  • CoRR

Volume abs/1802.07239  Issue

Pages  -

Publication date 2018